The New CCO Podcast
Listen on Apple Podcasts | Spotify | YouTube
In this episode, Lisa Kaplan offers CCOs a calming cup of chamomile tea and a strategy for being your organization's first responder: spotting sparks early and staying ahead of reputational risk.
AI-generated, some errors may appear.
[00:00:00]
Eliot Mizrachi: Disinformation, $7 bot networks, AI-powered smear campaigns. This isn't happening in the shadows anymore. It's hitting brands where it hurts the most: trust.
If you're a CCO, this isn't just noise; it's part of the job. Now, Lisa Kaplan didn't plan on becoming an expert in identifying disinformation, but she became one when she had to.
While running digital strategy for a US Senate campaign, she saw firsthand how false narratives could bubble up, go viral and spiral into real world consequences.
Lisa Kaplan: I realize I'm gonna be saying a lot of scary things. Like, it's not all bad, guys. Like, it's solvable.
Eliot Mizrachi: I'm Eliot Mizrachi, and this is The New CCO.
Eliot Mizrachi: We start with the shift. Disinformation used to be something companies watched from afar, often through the lens of national [00:01:00] security or politics. For most communicators, the response to reputational threats starts with monitoring. But as Lisa explains, the landscape has changed.
Lisa Kaplan: I mean, look, lying to achieve your goal is not a new and novel concept.
It's not ethical, but it's something that has been happening for thousands of years. I think one of the things that's interesting about disinformation and misinformation, and just how the online risk space has evolved, is that a lot of times these tactics really start as military operations.
This is the online version of dropping leaflets out of helicopters in Vietnam. The difference is it's much more effective online, because it's fast, cheap, and easy to do. You don't need to fly a helicopter; you just need to make a meme. And so one of the things that we oftentimes will see is that, yes, there are these sophisticated state actors, like China, like Russia (and Iran has started getting into this space more), who [00:02:00] are investing millions of dollars in running influence operations. And so is the United States government, by the way. This is something that over 80 countries are actively doing.
I think a key difference, though, and one of the many reasons we've seen a proliferation of this type of risk, is, again, we're talking about making memes, not enriching uranium. And so think about the risk from individuals who are not part of governments, who may have ill intentions, who might be part of, frankly, criminal groups, or who are otherwise trying to make a buck and are financially motivated.
We find that they are building out these different networks to achieve their own goals. And that could mean that, you know, the tactics that we see being used by Russia to reduce support for Ukraine, or to target American brands that it views as competitive to Russia's interests, are the same tactics we oftentimes see being used by groups attempting to short a stock around an earnings call. Or we will see [00:03:00] efforts to degrade an organization's brand trust with its stakeholders to cause brand boycotts, or spam and scams, or coordinated inauthentic behavior as it relates to some of these more sophisticated influence operations.
And so being able to understand where information is coming from and how it's spreading is the important intelligence that communications professionals need now, in order to successfully both protect the brand and identify opportunities to ride certain waves, get out in front of issues, or expose and add more context, which is ultimately the way that we see organizations successfully combat these risks.
And so I started Alethea a little over six years ago to help companies, brands, and organizations figure out how to defend their own reputations and their own ability to communicate with their stakeholders, and keep [00:04:00] their brand safe and maintain their license to operate.
Eliot Mizrachi: You used a phrase there, coordinated inauthentic behavior. What does that mean?
Lisa Kaplan: So that is actually a term of art used in the online risk space, and I believe it was originally coined by Meta as a platform; it might have been one of the other large social media platforms. But coordinated inauthentic behavior refers to a certain type of network, oftentimes one that might be propagating information or trying to essentially hack curation algorithms so that social media users are unwittingly shown nefarious state-backed propaganda. Or, since other entities can also copy these tactics, it may not necessarily be just state-backed anymore.
We sometimes see companies doing this to each other, unfortunately. But what it refers to is really two things. The coordinated piece refers to the fact that there's typically [00:05:00] some level of effort to try to game curation algorithms. This can look like using a series of fake accounts, whether that's sock puppets or bot networks. And one of my favorite fun facts (this research came out of the Alliance for Securing Democracy) is that you can buy a bot network for $7 out of a vending machine in St. Petersburg. But the idea is that there are some sort of fake accounts or manipulative, deceitful tactics that are coordinating the spread of certain information. The inauthentic piece refers to the inauthentic accounts, so that's the nature of the accounts: they're not real people. It's not as though everybody is organically coming together to post at the same time to show support. So coordinated inauthentic behavior is almost platform speak, using air quotes, for what could likely be an influence operation run by everything from the Russian or Chinese government to, sometimes, these shady [00:06:00] PR firms for hire, where you wanna be really careful about who you're actually working with.
Eliot Mizrachi: So it sounds like it refers to some of the tactics and machinery of propagating this information: taking the algorithms that are supposed to show us things that are relevant to us, and gaming them in a way that makes this information more visible and accessible to the target audience.
Lisa Kaplan: Exactly. The real purpose of these networks is to spread the information that they are putting out there to advance their own goals, often putting, frankly, legitimate organizations' and legitimate companies' goals at risk as a result. It's really designed to get as many eyeballs on something as possible, to make it trend, to make somebody believe something that may not be true, in order to influence their behavior.
So we definitely do see people evolving their tactics, whether it's, again, state actors or different groups or just individuals trying to push their own agenda. [00:07:00] Or sometimes we see political campaigns use these tactics, and that's true globally. But I would say that we really think about not just the pervasiveness and the accessibility of the tactics, but also how the internet has changed. How people engage online is totally different today than it was in 2019, when I started the company. So just some context there: 2019 was before (and I'm about to name a bunch of political actors; I don't mean this as a political statement) Elon Musk bought Twitter, which is now X, and that platform has undergone a series of changes based on new leadership's priorities. Truth Social didn't really exist in the same way. Gab, Gettr, 4chan, 8kun: these sources may have existed, but they weren't necessarily as popular. Bluesky didn't exist; Threads didn't exist. And so we used to talk a lot about how algorithms create silos, and that is still true. [00:08:00] But now there's also just a proliferation of platforms that users have migrated to.
TikTok, for example, is obviously a very prominent and very contentious platform in a lot of ways. And so I think what also happens is that these actors who are trying to seed campaigns against you know there are different user bases on each of these platforms, and they also know that companies typically don't have visibility into what's going on there.
Oftentimes what we will see is that these actors will post or seed narratives on places like Telegram or other kind of harder-to-reach platforms, with the hope that real people then bring these narratives over, getting influencers to talk about them on some of the platforms that people are more familiar with, like X and Reddit, which are more designed to disseminate at scale. And those, by the way, now have increasing importance because of the way that Google [00:09:00] algorithms have changed, and Grok, and how some of these data sources are being used to train AI models. We're increasingly seeing that as a key element of the threat: just the fact that there's this proliferation of platforms.
The second thing that we're seeing as a key trend is that the deterioration of trust in media has made a really big difference, I would say in the last three to four years. Back in 2019, when we caught something, we would just expose it, and that was enough, because people trusted the New York Times, the Wall Street Journal; all of these sources were credible in the eyes of many.
Now, what we know is that only 22% of Americans actually trust the media, 76% are actively avoiding it, and 78% report getting their news through infotainment. And I think that's really important when you think, then, not just about the influence of some of these nefarious campaigns, but about the influence of influencers. What are the right Substacks? What are the ones that seem to actually be resonating across these various [00:10:00] platforms? Same with podcasts. And so I think the whole information space has gotten a lot more complicated. The good news is (and if you wanna riff on the things that I think about at 3:00 AM, in terms of scary memes and nefarious campaigns, call anytime) that it's solvable.
The reality is, a lot of this is being propagated by AI, and it can also be solved with AI. And so while there's more data than ever before, and more risk exposure than ever before for businesses, there's also the ability to leverage some of these technology developments to boil down what's out there, who's behind it, how it's spreading, what type of risk (or, frankly, opportunity) it's creating, and then what you can actually do about it.
Eliot Mizrachi: Let's zoom forward to today. When you look out the window at the world of disinformation, what are the topics or themes, what are some of the issues, that you're really seeing pop up on the [00:11:00] radar in a major way right now?
Lisa Kaplan: So I would say that there were more executive orders in the first hundred days than probably ever before. Or at least that's what it feels like. And there's been a lot of change. The Trump administration is implementing the majority of the agenda that it promised it would, and it's doing so via executive order. So one of the things that we're seeing is that that is really overwhelming for a lot of organizations that are trying to navigate change and understand what it means for them.
And one of the things that we're observing: we saw DE&I be a pretty significant issue for folks to figure out earlier in the year, I would say January, February. Now we're spending a lot of time helping organizations understand some of the context around, for example, tariffs and MAHA, or the Make America Healthy Again movement. With the proliferation of platforms, some of the different influencers that are out there, and increased polarization, all these different [00:12:00] communities are popping up but not really talking to each other, and it's been really fascinating to see some of the evolution of these groups. So when tariffs, for example, were first announced,
what we saw was that users who were probably more politically right-leaning, based on some of the conversations and the language that was being used, were really supportive of the tariffs. People were excited; people were saying, yes, let's make sure that we are buckling down long term. This is going to be a good thing; we're gonna have some short-term pain for long-term gain here in the United States. We also then saw left-leaning users calling for investigations and oversight: who is benefiting from this financially? Where is there some element of corruption? We saw a lot of distrust in government, distrust of the idea that government could be acting in the interest of the people it has been elected to serve, and more questions of: who's making money off of this at the expense of the average American or [00:13:00] average global citizen? Some people were also more concerned about the global dynamics.
We did see a rise in anti-Americanism, especially when it came to certain corporate brands, and calls for boycotts globally. And I think for companies that had experienced that for other reasons, it was a particularly acute moment. And these communities kind of ebb and flow and change over time. So one thing that I found really fascinating: for a period of about 24 hours, Amazon had exposed how much the tariff would cost. Instead of just kind of hiding it all in one price, they broke out what the cost of the tariff would be. There was allegedly a phone call made from the president to Jeff Bezos, and then that went away. But one thing that we found absolutely fascinating, because a lot of times when we're looking at these different groups and different dynamics nobody ever agrees on everything: everybody was excited about having the tariff and that [00:14:00] dollar amount exposed, but for different reasons. The people who were saying, yes, we can do this, it's short-term pain, long-term gain: the narratives that we were seeing from them were really focused on, thank God I finally know what's imported, and now I can make sure that I'm buying American. So it almost played into some of these more, I would say, isolationist, protectionist philosophies, giving people the insight and the choice that they need to make sure that their consumer behavior matches their values. And then on the more political left, what we were seeing was: make sure everybody knows how expensive this is ahead of the next election. So it was more kind of the haha, you've been exposed, we've got you kind of rhetoric. And so one of the things that we've learned, for a whole variety of reasons, is that if companies are considering this (and, you know, private phone call from the president to your CEO aside, because that obviously could change the dynamic), consumers actually do want that [00:15:00] information and that knowledge.
And so that was an interesting learning for us as we've watched some of these narratives evolve on the tariff side. On the Make America Healthy Again type of narrative: for those who aren't aware, Make America Healthy Again is a movement of, I would say, health and wellness influencers who are really focused on advocating around certain things, like whether or not ingredients are safe to have in certain food products, or vaccines, or pharmaceutical products. They're looking at things like food dyes, and they're essentially saying: listen, in the United States we are a very sick country, and a lot of that has to do with the food that we're eating and what we're putting into our bodies. How is it that regulation can help make sure that the food we are putting into our bodies is keeping us safe and healthy and helping us to grow, and not creating disease or exposing us to additional risk, et cetera?
I [00:16:00] think one of the challenges that a lot of brands and companies are having right now is they don't know if they're going to be targeted next, because a lot of the things that we eat and a lot of the things that we consume do contain these ingredients. We've seen, for example, seed oils become something whose potential health impacts the Make America Healthy Again community is very concerned about. I'm starting to see brands who probably never had seed oils in the first place start saying, we are certified seed-oil free, or, don't worry about coming to our hotel chain and having to eat seed oils, and things like that. So we are starting to see some brands proactively communicate the absence of some of these ingredients. And then we are also seeing brands that are deeply concerned, because what does it mean for their entire production, their supply chain, all that they've invested? If they have to get rid of some of these ingredients, will consumers still [00:17:00] like their products? Those types of things.
So companies are a little bit between a rock and a hard place, I would say, caught between those two things. On the one hand, they obviously wanna make sure that they're in compliance with regulations coming out of the United States government. It's a huge market for them; I don't think anybody wants to leave it. Many are headquartered in the United States, and, frankly, I think a lot of foreign countries have depended on the United States to set the standards. They also, though, are going to face significant costs and impacts if it all gets implemented overnight. They also have to maintain trust with their consumers. They have to make sure that they're still delivering a product that consumers enjoy and wanna try. And so we have seen everybody from pharmaceutical companies to CPG and QSR companies have to really figure out, as they're getting targeted by these movements, what it is that they want to do in order to continue driving their business forward.
Eliot Mizrachi: I'm glad that you landed there, Lisa, because that's really [00:18:00] where I wanted to go with this question. So you've helped us understand the context: here are the issues and the tactics, this information warfare that's happening, and some of the issues it's focused on. But where you took it, ultimately, is brand. I thought it was interesting that you said, on the tariff issue, what you see is in the eye of the beholder, right? There wasn't a single narrative emerging around it; it seemed to be different versions of a narrative based on the same set of core events. And I imagine that would be challenging for somebody whose responsibility is brand strategy, who wants to build trust with a diverse universe of stakeholders but is operating in this kind of information environment. So how should CCOs be thinking about brand strategy in this reality?
Lisa Kaplan: So I've been thinking about this a lot, because the CCO role has just evolved so much, I think in part because of the need to not just be the [00:19:00] person who's determining what the outward messaging and positioning should be. This is the new preeminent risk facing organizations, and it used to be the CISO's job. It used to be: how do we get endpoint monitoring on all of our phones and laptops, and make sure that our systems and our ability to share information with each other are secure? But there's no endpoint on the internet. The internet is endless. And so one of the things that we're seeing is that CCOs are the first phone call when something weird is happening online.
And the first thing that they need to be able to do is start with the context. So, is it actually a narrative, or is it a data point? Is it just one post that somebody sent to the CEO that looks really scary but is less than a fraction of a percent of the conversation, where if you do anything, you will [00:20:00] create the Streisand effect? Or maybe this is less than 1% of the conversation, but we know it's gonna be a big problem. That's when you wanna understand: what is actually out there? What does your current landscape look like? What are all of the narratives that are out there?
Who's behind this particular narrative? Because you will handle something differently if it is the Russian or the Chinese government versus some of these influencers. And I don't wanna understate the power of some of these influencers: we've obviously seen that some influencers have significant influence over the US government, for example. We saw that staffing changes were made after influencers were invited to have meetings with the president. And so understanding where you are at and where your brand perception is, based on who's talking about it, can also help you determine the risk. Is it a regulatory risk? Is it [00:21:00] a brand risk? And, frankly, I think a lot of people forget about employees in this moment: is this going to create a recruitment or a retention issue?
Is it something that's going to create legal risks? So having the context of how large it is and who's behind it is important, as well as how it's spreading. There's a very big difference between how you would handle something that is completely organic, where there is outrage because a product wasn't safe or because a key executive did something that they shouldn't have and had to be let go. There are certain scenarios that, as communicators, you know are just a crisis, and you know how to navigate them. There are other scenarios, though, where you have to deal with things that sometimes have a kernel of truth but are missing a lot of context, and that's why it's effective, or why it's challenging to really make sense of what the impact is. [00:22:00] And so looking at how it's spreading, who it's reaching, and whether there's nefarious intent, based on what mechanisms are being used to spread the narrative: that's the type of information communicators need, because CCOs are smart people. The challenge is that oftentimes they're flying blind, and they don't have the answers to those questions. And what's trending on Twitter, or what's on the front page of the New York Times or the Wall Street Journal, isn't the information that you need to actually navigate some of these sensitive situations.
Eliot Mizrachi: wanna double click.
you talked about early detection of emerging narratives. I just wonder what the hallmarks are of those things.
how would you determine this is something that has the potential to become larger versus this is something that's,an isolated bunch of crazy people and not something to worry about.
Lisa Kaplan: Yep. So there are a couple of key things that we look for. First, on what's actually a coordinated bot network [00:23:00] or another tactic like that: we've actually trained machine learning models to identify what we call tactics, techniques, and procedures, or TTPs, used to game curation algorithms. There are only so many ways that actors can game curation algorithms. They're pretty sophisticated machine learning models, but if you spend too much time with me, you'll just start thinking of AI as fancy math, because that's what it is. We are able to essentially look at: where are the account behaviors that signal somebody's doing something that they shouldn't be? So we're looking at everything from posting patterns to account creation to how information is spreading between groups of accounts over time, and really understanding that social graph through the lens of: where are there patterns and anomalies that suggest it's non-human behavior or otherwise coordinated? I'm also not one of those people who thinks we're all gonna be a bunch of AI agents talking to each other, because there are certain decisions or [00:24:00] certain judgments that I just don't think AI should ever be trusted to make. And I can hear half of Silicon Valley yelling at me as I say that, but the reality is, for making an attribution claim (so, where something is coming from, whether or not accounts are authentic), we will surface those signals, but we'll then have a technical investigator who is an expert (think more the ethical hacker type) actually go look and make sure that it's not real people.
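As a purely illustrative sketch (not Alethea's actual models; the account names, data shape, and thresholds below are all hypothetical), the simplest version of the coordination signal Lisa describes is a pair of accounts that repeatedly post within seconds of one another:

```python
# Illustrative only: flag pairs of accounts that repeatedly post within a few
# seconds of each other -- one crude "coordinated posting" signal.
from collections import Counter
from itertools import combinations

# Each post is (account_id, unix_timestamp); real data would come from
# platform APIs or collected datasets, not a hard-coded list.
posts = [
    ("acct_a", 1700000000), ("acct_b", 1700000002), ("acct_c", 1700000900),
    ("acct_a", 1700003600), ("acct_b", 1700003601), ("acct_c", 1700007200),
]

WINDOW_SECONDS = 5   # how close two posts must be to count as "near-simultaneous"
MIN_CO_POSTS = 2     # how many such bursts before a pair looks coordinated

pair_counts = Counter()
for (a1, t1), (a2, t2) in combinations(posts, 2):
    if a1 != a2 and abs(t1 - t2) <= WINDOW_SECONDS:
        pair_counts[tuple(sorted((a1, a2)))] += 1

suspicious = {pair: n for pair, n in pair_counts.items() if n >= MIN_CO_POSTS}
print(suspicious)  # {('acct_a', 'acct_b'): 2} -- a human investigator takes it from here
```

Real models combine many such signals (account age, content similarity, graph structure); the point, as Lisa notes, is that the machine surfaces the anomaly and a human makes the attribution call.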
And that does sometimes take a little bit of digging; sometimes it's super easy to do. So that's one way that we look at this. To your other question, around emerging risks: we've had to create risk scores, and again, we're using machine learning to do this in an automated fashion. Not every account online is going to create a significant risk for your business. For example, I don't think that I have used anything except for LinkedIn in over a year. If I talk about your brand on X [00:25:00] or on Instagram or something like that, it's gonna have literally zero impact. But we also know who the people are that are influential. This is where our data science team has been able to identify influencers based on engagement metrics, based on who they seem to be popular with, the type of content that they're posting, those sorts of things. All using publicly available information, we're able to actually start to get predictive. Okay, yes: if an individual like Elon Musk or President Trump talks about you, it will go viral, no doubt about it. The engagement, the metrics, it's just going to happen, and you ride the wave; it's gonna be 24 hours. Maybe it's even a good thing that they're saying, who knows? But we also look further upstream to understand who's influencing the influencers. Who are the micro-influencers in specific communities that you might care about, whether that's because they're your loyal customer base [00:26:00] or because they are somehow influential to one of these major influencers? So we're doing a lot of work in that regard to be able to pull out: hey, this looks like a risk, versus, this is just somebody who maybe lost their luggage on a flight.
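A toy illustration of the account risk scoring Lisa outlines, weighting reach, engagement, and upstream influence; the fields, weights, and cutoffs here are invented for the example and are not Alethea's:

```python
# Illustrative only: a toy "account risk score" combining reach, engagement,
# and proximity to known influencers. All weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    avg_engagements: float            # likes/replies/reposts per post
    followed_by_key_influencers: int  # "who's influencing the influencers"
    on_platform_brand_cares_about: bool

def risk_score(a: Account) -> float:
    if not a.on_platform_brand_cares_about:
        # Lisa's point: a post on a platform your audience never sees
        # has "literally zero impact."
        return 0.0
    reach = min(a.followers / 1_000_000, 1.0)             # normalize to [0, 1]
    engagement = min(a.avg_engagements / 10_000, 1.0)
    upstream = min(a.followed_by_key_influencers / 10, 1.0)
    return round(0.4 * reach + 0.3 * engagement + 0.3 * upstream, 3)

# A huge account scores near 1.0; someone venting about lost luggage doesn't.
print(risk_score(Account(5_000_000, 50_000, 8, True)))  # 0.94
print(risk_score(Account(300, 2, 0, True)))             # ~0.0
```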
Eliot Mizrachi: Yeah. So I sort of had in my mind that part of the solution, the response, would be exactly what you just said: let's identify the influencers in this space and see if we can provide them with information so that they can start to counter that damaging narrative. But you said something interesting in the example that you gave, which was that the course they took was to go to a reporter, and essentially there was a story that exposed this, and that was the deciding factor. But you also said there's a lot of declining trust in legacy media. So I just wonder what your thought is: is it effective to go to more traditional media as a way to expose this? Is it more effective to identify influencers in those [00:27:00] micro-communities? Is it a combination of the two? What does the toolkit look like?
Lisa Kaplan: So there's definitely no one silver bullet on mitigation. Oftentimes organizations need to consider multiple strategies, either in parallel or sequenced. So, for example, when you are looking at traditional media, whether you're exposing something or determining how to respond: I'm actually of the belief, too, that certain Substacks are worth responding to, just because otherwise people will write whatever story they wanna write, and some of these organizations have more reach than traditional media. For example (and I would consider them a quality media source), the Bulwark's podcast has a higher reach than Jake Tapper's podcast, which is something that I learned when I ran into those guys, who I adore. So the question, I think, is: you're playing a game of chess, not checkers, so what's the right piece to move given the context? I look at traditional media as still being a [00:28:00] validated third-party source with editorial review, so that if you are trying to reach your Wall Street analysts, if you are trying to reach your peer set, if you are trying to reach, frankly, your employees, a lot of them probably do trust what's in the media, and that can have an impact. It also creates a third-party source that is generally trusted, one that you aren't able to pay the way that you can pay influencers, should you need to point to something in a congressional hearing as reported by the Wall Street Journal or the New York Times or the Washington Post. That's where you're playing that three-dimensional chess, and it can be really helpful.
Same with a court of law, if it's going to become a regulatory or legal issue. When it comes to your customers, every customer base is different. Even for those who are trying to be, you know, companies for an entire country or an entire world, there will be different communities who are more influential than others. Look at your market research data, try to understand what's out there, and I would [00:29:00] say really look at which influencers are actually important in those conversations. So when we get called in for things like activist investor situations, short-seller attacks, that sort of thing, big PR firms will always be like, here's your list of beat reporters. And we're like, well, yeah, and you should totally call them if you're trying to reach your Wall Street analyst or whoever it is at the hedge fund. But if you're playing for hearts and minds, here are the influencers in the conversation. Who do you already know? It's hard to build that bridge in a crisis. And so that's where understanding what conversations you want to be a part of and invest in at a steady state can be really critical to your influencer marketing strategy.
Eliot Mizrachi: Sure, monitoring still matters, but as Lisa points out, it's not just the message, it's the momentum behind it that makes the difference.
Lisa Kaplan: One is known risks, and those are the easiest to deal with, right? You know something might be coming, or something's out there that doesn't make sense, and you identified it through your media monitoring. [00:30:00] In those moments, what we're able to do is help you get that context: who's behind it? How is it spreading? Is it impacting your core stakeholders? And what can you potentially do about it? I think one thing that we've seen be really powerful is not necessarily what communicators usually go to, the usual tools in the toolbox of posting something on social, influencer marketing, press strategy, that sort of thing,
but actually some of the more subtle, behind-the-scenes efforts. So, for example, we work with a multinational organization with offices all over the place. They're a very visible Western brand, and they started to be targeted within a country where they were operating. I'm gonna talk in circles a little bit here just to protect client confidentiality, but it'll all make sense in a second. They identified something (we weren't actually originally even scoped to work with them in this capacity, but our [00:31:00] technology works in multiple languages): they were being targeted by a narrative that was essentially alleging improper influence, shady business practices, and that they were trying to exert control over certain areas of this specific country. And it was all being posted mostly in the local language, with a little bit of English, which is a commonly spoken language in the country as well. What we ended up seeing when we used our technology was that there was actually a pretty large bot network that was amplifying, and using AI to spread, these different messages and narratives. And when we looked at where it was coming from, it was actually coming from the host government. In this particular country, you do not wanna call out the host government for targeting your brand, because that is a surefire way to get guys with guns to show up at your office. Instead, what they did is they made a private phone call, actually to a reporter, an [00:32:00] international reporter, to do a preemptive fact-check, a pre-bunk. They fact-checked it because they knew it would become a challenge; they knew the whole thing was ludicrous. And so a reporter actually exposed the narrative, and they won the narrative, because that story was the largest piece of content in terms of engagement, in terms of reach and spread, by a long shot. That type of thing works if you have the early indication and early insight, and you're willing to take that smart bet of, all right, we're not gonna have our fingerprints on it, but we're just gonna go for it and see what happens. And then, as a result, because this government was using what's called coordinated inauthentic behavior, those nefarious tactics to hide who you are and how you're spreading information, it was a violation of the platform's policies, and the platform took it down. The company never contacted the platform, never asked for a takedown, so they didn't have to worry about any accusations of censorship or anything like [00:33:00] that.
But the mere fact that it was reported by a major media outlet enabled that whole situation to go away. So that's one way that we partner. The other way, though, is we look for what we call the unknowns. This is where we've really invested in being able to understand where these risks and narratives start, before they become the mainstream risks that you're seeing through your media monitoring tools.
Oftentimes what we find is that we will pick up on signals that a narrative might be brewing sometimes a week before you're getting the outreach from the reporter. And a lot of what we do, too, is make sure that we are tracking narratives as they emerge, so that it isn't something that gets, quote unquote, swallowed up. It turns from an online conversation into perhaps a blog, into perhaps a more, I would say, tier-two or tier-three media outlet, where there's more bias than in your typical tier ones, and it continues to get swallowed up into the tier ones, because everybody's pointing backwards and citing the sources, and you trace the sources [00:34:00] to find out that it started on social media and that none of it is actually true. And so that's another thing that we help with: understanding where those risks are coming from, so that you can put out more context, so that you can make sure that you are proactively briefing key stakeholders, whether that's regulators or reporters, off the record, so that you can use that strategic intelligence to shape the narrative away from risk and, you know, essentially deflect and build that moat.
Eliot Mizrachi: And here's where AI comes in, not as the threat, but the defense. Even with all the AI and analytics, Lisa reminds us the real power lies in human judgment, and often the smartest move is knowing who to call and when.
Lisa Kaplan: So AI is fundamentally going to change the way that we all work; I think we know that. In terms of how we're seeing AI impact this particular space on the risk side, and how it's creating more risk: [00:35:00] it has never been easier or faster to create content and post it. We're actually working right now on an investigation into how LLMs appear to be being used on some of the different social media platforms, which I will send to your team at Page hopefully in the next couple of days. But the idea is that the volume of content is increasing, and it's increasing in sophistication. Back in 2018, 2019, when it was really just like shooting fish in a barrel trying to find these kinds of networks, you would see all sorts of sloppy mistakes, particularly coming from state actors, so Russia, China, Iran, because there were grammatical errors. There were contextual, cultural things: the way that different cultures use emojis, for example, was a really big tell, because Americans didn't use a ton of emojis in their tweets at the time, the way that maybe [00:36:00] other cultures might, or Europeans might on WhatsApp, for example.
So you would see a lot of that happening. The quality of campaigns has gotten better, because one thing that LLMs are really good at, first, is translation, so you don't get those same kinds of grammatical mistakes. One of my favorite ways that we used to call a Russian network is that the commas are actually different on a Russian Cyrillic keyboard versus an English keyboard, which is, I realize, a very nerdy thing to say on a podcast. You're not gonna find that stuff anymore; it's like they have the best spell check in the world now. And they also don't necessarily have to supervise the LLMs, so they can kind of set it and forget it, because the reality is, if it doesn't work, they just delete it and take it down. That's what these nefarious actors do.
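The keyboard and script artifacts Lisa mentions can be illustrated with a related legacy tell: Cyrillic characters hiding inside otherwise Latin text. This is a hypothetical example of that class of signal, which, as she notes, LLM-polished campaigns no longer reliably leave:

```python
# Illustrative only: flag Cyrillic characters embedded in otherwise-Latin text,
# one example of the keyboard/script artifacts that used to give networks away.
import unicodedata

def cyrillic_chars(text: str) -> list[str]:
    """Return characters from the Cyrillic block found in the text."""
    return [ch for ch in text if "CYRILLIC" in unicodedata.name(ch, "")]

latin = "Support our cause"
spoofed = "Suppоrt оur cаuse"  # the 'о' and 'а' here are Cyrillic homoglyphs

print(cyrillic_chars(latin))    # []
print(cyrillic_chars(spoofed))  # ['о', 'о', 'а'] -- a signal worth a closer look
```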
So we see more information out there, I think. In terms of deepfakes and things like that, we have not yet meaningfully seen a deepfake get deployed in a way that is, I would [00:37:00] say, more effective than even some of the text-based campaigns. I do think it's coming. Right now, our human analysts, who are, again, experts, are still better at identifying deepfakes online than some of the technologies that are being developed, because the deepfakes themselves have a lot of very visible tells.
That being said, I think we're in a window of three to six months where the technology's gonna get way better, and a human's not gonna be able to tell the difference between a deepfake and a real photo. There's also, though, the solution side. So, you know, the sky is falling, everything is on fire: nothing anybody hasn't said before. But I am really focused on how we actually solve this problem and what the uses of AI are to detect this. Every company, at the end of the day, is going to end up being an AI company. It's gonna be about what data you have and how you analyze it to create an insight.
Again, the way our machine learning models are used is [00:38:00] to pull out where there is coordination, where there are different risk signals emerging. We're now experimenting with agentic AI on how you actually automate that mitigation workflow, so that your time from detection to remediation is something that you can just set and forget for common issues. We catch a spoof domain about you? No problem: the technology is automatically reporting it and then, you know, sending you an email saying, here's what's happening or not happening. So there are, I think, a lot of really amazing ways that we can use AI to solve the threat. It's gonna be a cat-and-mouse game.
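A minimal sketch of the kind of automated spoof-domain check Lisa describes; the brand name, domain list, and similarity threshold are hypothetical, and a real pipeline would pull candidate domains from sources like certificate-transparency logs rather than a hard-coded list:

```python
# Illustrative only: flag newly registered domains that look like a brand,
# the first step of an automated detection-to-remediation workflow.
from difflib import SequenceMatcher

BRAND = "examplebrand"  # hypothetical brand name

def looks_like_spoof(domain: str, brand: str = BRAND, threshold: float = 0.8) -> bool:
    """Flag domains whose registrable label is suspiciously close to the brand."""
    label = domain.split(".")[0]
    if label == brand:
        return False      # the brand's own domain
    if brand in label:
        return True       # e.g. examplebrand-support
    return SequenceMatcher(None, label, brand).ratio() >= threshold

new_registrations = ["examp1ebrand.com", "examplebrand-support.net", "flowershop.com"]
for d in new_registrations:
    if looks_like_spoof(d):
        print(f"report + alert: {d}")  # hand off to reporting/notification step
```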
The thing we're not talking about, which is going to be the new "AI" in air quotes on conference circuits, is quantum. And when that comes, truly the ability to create random patterns, the concern is that it'll break encryption. But quantum I can see being something that really changes the game as well. We're probably five to ten years out from that being a significant risk, but it's something that we're already starting to talk [00:39:00] about and think about internally, in terms of how an actor could potentially leverage quantum computing to hide their tracks even more than they're already able to do.
Eliot Mizrachi: So, Lisa, if I'm a CCO and I've listened to this episode so far, I'm worried about whether or not I'm gonna sleep tonight. And I'm thinking there's certainly a lot of risk, but also a lot of opportunity for me to be a stronger leader for my enterprise in this area. What do CCOs need to understand, or need to be doing or thinking about on behalf of their organizations, in order to manage this environment and the rapid pace of change that we're seeing?
Lisa Kaplan: So if you are a CCO and you are listening to this, I hope that this is like the cup of chamomile tea or the warm glass of milk before bed that takes the temperature down a little bit and de-stresses you. All of this is solvable. I do think it's going to take an evaluation of your technology stack. Yes, social listening tools [00:40:00] are great for telling you how the op-ed you got placed in the Wall Street Journal resonated with your various communities. What they're not going to give you is that early detection, early warning risk signal. And I would look at how, as organizations are asked to do more with less, you can leverage AI automation and get as much data as possible, to focus on: what are the signals out there that your team, your whole company, but also specifically the communications team, needs to know in order to make the right decisions? Make sure that your team has as much time and information as possible to make the decisions that guide the organization. And that can be done with the right tech stack, because what technology is really good at is taking the manual, rote processes that [00:41:00] eat a lot of time, because there's a lot of data, that sort of thing, and basically automating those functions. What you're doing is creating leverage for yourself.
I also think that CCOs (and this is gonna be a tough cultural change for the entire field, but we're seeing some real bright spots) have to unlearn some habits that got them to where they are. Watch-and-wait is not always the right answer anymore, because if you are not at the table, you are on the menu. And that is true from a regulatory perspective. It is true in chat rooms and forums and things like that, where some of these narratives are being planned that then lead to physical protests, brand boycotts, recruiting events at colleges being canceled. And so I think of it as: don't wait until it's a forest fire.
Be aware of where all the little sparks are, because you never know which spark is going to turn into a forest fire. We can predict, but, not to be [00:42:00] too dorky and academic about it, until it happens, it's a prediction. And so my advice is: look at, and start to think about, the internet and the online information ecosystem differently. What's happening now on Reddit, even just casual conversation that you might have thought you never needed to respond to or put your own perspective out on, now feeds the Google algorithms. If somebody sees something online and they're saying, wait, is that real?, the first thing they're gonna do is go Google it. So what is it that you want to come up? Similarly, Wikipedia pages and websites are being fed to OpenAI's models, and that's then impacting other search results. It may be impacting the broader technology stack that you already have in place. But I think my advice is: make sure that you have the insight and the context you need to make the decision, and be really proactive in your own brand management.
Eliot Mizrachi: That's our show. Disinformation isn't [00:43:00] going away, but as Lisa showed us, neither is the communicator's power to make sense of it. With the right mix of tools, timing, and judgment, we can do more than react: we can lead with clarity, and maybe even restore a little trust. That might mean staying silent. It might mean issuing a statement, calling a reporter, or briefing your board. But whatever the tactic, it starts with understanding the risk early. Brand protection today is as much about strategic foresight as it is about response. I'm Eliot Mizrachi. Thanks for listening to The New CCO.